You have a function rand7() that generates a random integer from 1 to 7. Use it to write a function rand5() that generates a random integer from 1 to 5.

rand7() returns each integer with equal probability. rand5() must also return each integer with equal probability.

Gotchas

Your first thought might be to simply take the result of rand7() and take a modulus:

  def rand5():
    return rand7() % 5 + 1

However, this won't give an equal probability for each possible result. We can write out every possible result from rand7() (each of which is equally probable, per the problem statement) and see that some rand5() results are more likely than others because more rand7() results map to them:

rand7()    rand5()
   1          2
   2          3
   3          4
   4          5
   5          1
   6          2
   7          3

So we see that there are two ways to get 2 and 3, but only one way to get 1, 4, or 5. This makes 2 and 3 twice as likely as the others.
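
If you'd rather let the computer do the counting, here's a minimal sketch that enumerates the seven equally likely rand7() outcomes and tallies the rand5() value each one maps to:

  from collections import Counter

  # Tally the rand5() result produced by each of the seven
  # equally likely rand7() outcomes under the naive mapping.
  counts = Counter(roll % 5 + 1 for roll in range(1, 8))

  print(counts)  # 2 and 3 appear twice; 1, 4, and 5 appear once each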

What about calling rand7() five times, summing the results, and then taking the modulus?

This is really close to uniform, but not quite. Since we're calling rand7() five times, there are 7^5 = 16,807 possible results. That's not divisible by five, so some outcomes must be more likely than others. (If you're curious, 1 is the result 3,357 times; 2 and 5 are the result 3,360 times each; and 3 and 4 are the result 3,365 times each.)
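
If you want to check those counts yourself, here's a rough brute-force sketch. It assumes the five rolls are combined as sum % 5 + 1, the same mapping as the single-roll version above:

  from collections import Counter
  from itertools import product

  # Enumerate all 7^5 = 16,807 equally likely sequences of five rolls
  # and tally the final result each sequence produces.
  counts = Counter(sum(rolls) % 5 + 1
                   for rolls in product(range(1, 8), repeat=5))

  print(counts)  # close to 16,807 / 5 per result, but not equal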

In fact, no matter how many times we run rand7(), we'll never get a number of outcomes that's divisible by five.

It turns out every integer can be expressed as a product of prime numbers (its prime factorization). For example:

8 = 2 * 2 * 2
15 = 5 * 3
864 = 2 * 2 * 2 * 2 * 2 * 3 * 3 * 3
13 = 13

It also turns out that every positive integer has only one prime factorization. This is called the "fundamental theorem of arithmetic."

Since seven is already prime, any number that can be expressed as 7^n (where n is a positive integer) will have a prime factorization that is all sevens. For example, here are the prime factorizations for 7^2, 7^3, and 7^4:

7^2 = 49 = 7 * 7
7^3 = 343 = 7 * 7 * 7
7^4 = 2401 = 7 * 7 * 7 * 7

Five is also prime, so if any power of seven were divisible by five, five would have to appear in its prime factorization. But that can't happen: a power of seven's prime factorization is all sevens, and every number has only one prime factorization. So no power of seven is divisible by five.
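
Here's a quick sanity check of that claim for the first several powers of seven (a spot check, not a proof; the prime-factorization argument is what covers every n):

  # 7^n mod 5 cycles through 2, 4, 3, 1 and never hits 0,
  # so none of these powers of seven is divisible by five.
  print(all(7 ** n % 5 != 0 for n in range(1, 30)))  # True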

The answer takes worst-case infinite time. However, we can get away with worst-case O(1) space. Does your answer have a non-constant space cost? If you're using recursion (and your language doesn't have tail call optimization), you're potentially incurring a worst-case infinite space cost in the call stack.

Some compilers and interpreters will do what's called "tail call optimization" (TCO), where they can optimize some recursive functions to avoid building up a tall call stack. Python and Java decidedly do not use TCO. Some Ruby implementations do, but most don't. Scheme is one of the few languages that guarantee TCO in all implementations. In general, it's best not to assume your compiler or interpreter will do this work for you.

Overview

The call stack is what a program uses to keep track of function calls. The call stack is made up of stack frames—one for each function call.

For instance, say we called a function that rolled two dice and printed the sum.

  import random

  def roll_die():
      return random.randint(1, 6)

  def roll_two_and_sum():
      total = 0
      total += roll_die()
      total += roll_die()
      print(total)

  roll_two_and_sum()

First, our program calls roll_two_and_sum(). It goes on the call stack:

roll_two_and_sum()

That function calls roll_die(), which gets pushed on to the top of the call stack:

roll_die()
roll_two_and_sum()

Inside of roll_die(), we call random.randint(). Here's what our call stack looks like then:

random.randint()
roll_die()
roll_two_and_sum()

When random.randint() finishes, we return back to roll_die() by removing ("popping") random.randint()'s stack frame.

roll_die()
roll_two_and_sum()

Same thing when roll_die() returns:

roll_two_and_sum()

We're not done yet! roll_two_and_sum() calls roll_die() again:

roll_die()
roll_two_and_sum()

Which calls random.randint() again:

random.randint()
roll_die()
roll_two_and_sum()

random.randint() returns, then roll_die() returns, putting us back in roll_two_and_sum():

roll_two_and_sum()

Which calls print():

print()
roll_two_and_sum()
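
If you want to watch these frames for yourself, here's a small sketch using Python's standard traceback module (not part of the original example) to print the live call stack from inside roll_die():

  import random
  import traceback

  def roll_die():
      # Prints the current call stack: the module level at the bottom,
      # then roll_two_and_sum(), with roll_die() on top.
      traceback.print_stack()
      return random.randint(1, 6)

  def roll_two_and_sum():
      print(roll_die() + roll_die())

  roll_two_and_sum()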

What's stored in a stack frame?

What actually goes in a function's stack frame?

A stack frame usually stores:

  • Local variables
  • Arguments passed into the function
  • Information about the caller's stack frame
  • The return address—what the program should do after the function returns (i.e.: where it should "return to"). This is usually somewhere in the middle of the caller's code.

Some of the specifics vary between processor architectures. For instance, AMD64 (64-bit x86) processors pass some arguments in registers and some on the call stack. And, ARM processors (common in phones) store the return address in a special register instead of putting it on the call stack.
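
In Python, you can peek at some of this bookkeeping with the standard inspect module. Here's a rough sketch (the function names are made up for illustration):

  import inspect

  def callee(arg):
      doubled = arg * 2
      frame = inspect.currentframe()
      print(frame.f_locals)               # local variables and arguments
      print(frame.f_back.f_code.co_name)  # the caller's name: 'caller'

  def caller():
      callee(21)

  caller()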

The Space Cost of Stack Frames

Each function call creates its own stack frame, taking up space on the call stack. That's important because it can impact the space complexity of an algorithm. Especially when we use recursion.

For example, if we wanted to multiply all the numbers between 1 and n, we could use this recursive approach:

  def product_1_to_n(n):
    return 1 if n <= 1 else n * product_1_to_n(n - 1)

What would the call stack look like when n = 10?

First, product_1_to_n() gets called with n = 10:

    product_1_to_n()    n = 10

This calls product_1_to_n() with n = 9.

    product_1_to_n()    n = 9
    product_1_to_n()    n = 10

Which calls product_1_to_n() with n = 8.

    product_1_to_n()    n = 8
    product_1_to_n()    n = 9
    product_1_to_n()    n = 10

And so on until we get to n = 1.

    product_1_to_n()    n = 1
    product_1_to_n()    n = 2
    product_1_to_n()    n = 3
    product_1_to_n()    n = 4
    product_1_to_n()    n = 5
    product_1_to_n()    n = 6
    product_1_to_n()    n = 7
    product_1_to_n()    n = 8
    product_1_to_n()    n = 9
    product_1_to_n()    n = 10

Look at the size of all those stack frames! The entire call stack takes up O(n) space. That's right—we have an O(n) space cost even though our function itself doesn't create any data structures!

What if we'd used an iterative approach instead of a recursive one?

  def product_1_to_n(n):
    # We assume n >= 1
    result = 1
    for num in range(1, n + 1):
        result *= num

    return result

This version takes a constant amount of space. At the beginning of the loop, the call stack looks like this:

    product_1_to_n()    n = 10, result = 1, num = 1

As we iterate through the loop, the local variables change, but we stay in the same stack frame because we don't call any other functions.

    product_1_to_n()    n = 10, result = 2, num = 2

    product_1_to_n()    n = 10, result = 6, num = 3

    product_1_to_n()    n = 10, result = 24, num = 4

In general, even though the compiler or interpreter will take care of managing the call stack for you, it's important to consider the depth of the call stack when analyzing the space complexity of an algorithm.

Be especially careful with recursive functions! They can end up building huge call stacks.

What happens if we run out of space? It's a stack overflow! In Python 3.6, you'll get a RecursionError.
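
For instance, this tiny sketch blows the stack on purpose (the exact point of failure depends on the interpreter's recursion limit, typically around 1,000 frames):

  def count_down(n):
      # No base case on purpose: every call adds another stack frame.
      return count_down(n - 1)

  try:
      count_down(10)
  except RecursionError:
      print('Ran out of stack frames!')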

If the very last thing a function does is call another function, then its stack frame might not be needed any more. The function could free up its stack frame before doing its final call, saving space.

This is called tail call optimization (TCO). If a recursive function is optimized with TCO, then it may not end up with a big call stack.

In general, most languages don't provide TCO. Scheme is one of the few languages that guarantee tail call optimization. Some Ruby, C, and JavaScript implementations may do it. Python and Java decidedly don't.

But replacing your recursion with a loop avoids this.

Breakdown

rand5() must return each integer with equal probability, but we don't need to make any guarantees about its runtime...

In fact, the solution has a small possibility of never returning...

Solution

We simply "re-roll" whenever we get a number greater than 5.

  def rand5():
      result = 7  # any value greater than 5, so the loop runs at least once
      while result > 5:
          result = rand7()
      return result

So each integer 1, 2, 3, 4, or 5 has a probability of 1/7 of appearing on each roll. Since we keep re-rolling until one of them comes up, each is equally likely to be the final result, with probability 1/5.
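
As a quick sanity check (not a proof), we could simulate many calls and confirm the distribution looks roughly uniform:

  from collections import Counter

  # Call rand5() many times; each of 1-5 should come up about 20% of the time.
  counts = Counter(rand5() for _ in range(100_000))
  print(counts)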

Complexity

Worst-case O(∞) time (we might keep re-rolling forever) and O(1) space.

Note that if we weren't worried about the potential space cost (nor the potential stack overflow) of recursion, we could use an arguably more readable recursive approach with O(∞) space cost:

  def rand5():
    result = rand7()
    return result if result <= 5 else rand5()

Bonus

This kind of math is generally outside the scope of a coding interview, but: if you know a bit of number theory you can prove that there exists no solution which is guaranteed to terminate. Hint: it follows from the fundamental theorem of arithmetic.

What We Learned

Making sure each possible result has the same probability is a big part of what makes this one tricky.

If you're ever struggling with the math to figure something like that out, don't be afraid to just count. As in, write out all the possible results from rand7(), and label each one with its final outcome for rand5(). Then count up how many ways there are to make each final outcome. If the counts aren't the same, the function isn't uniformly random.
